Search Results: "cree"

10 October 2023

Matthias Klumpp: How to indicate device compatibility for your app in MetaInfo data

At the moment I am hard at work putting together the final bits for the AppStream 1.0 release (hopefully to be released this month). The new release comes with many new features, an improved developer API and removal of most deprecated things (so it carefully breaks compatibility with very old data and the previous C API). One of the tasks for the upcoming 1.0 release was #481, asking about a formal way to distinguish Linux phone applications from desktop applications. AppStream infamously does not support any "is-for-phone" label for software components; instead, the decision whether something is compatible with a device is based on the device's capabilities and the component's requirements. This allows truly adaptive applications to describe their requirements correctly, and does not lock us into form factors going into the future, as there are many, and the feature range between a phone, a tablet and a tiny laptop is quite fluid.

Of course the "match to current device capabilities" check does not work if you are a website ranking phone compatibility. It also does not really work if you are a developer and want to know which devices your component/application will actually be considered compatible with. One goal for AppStream 1.0 is to have its library provide more complete building blocks to software centers. Instead of just a "here's the data, interpret it according to the specification" API, libappstream now interprets the specification for the application and provides API to handle most common operations, like checking device compatibility. For developers, AppStream also now implements a few virtual "chassis configurations" to roughly gauge which configurations a component may be compatible with.

To test the new code, I ran it against the large Debian and Flatpak repositories to check which applications are considered compatible with which chassis/device types already. The result was fairly disastrous, with many applications not specifying compatibility correctly (many do, but it's by far not the norm!). Which brings me to the actual topic of this blog post: very few people seem to really know how to mark an application compatible with certain screen sizes and inputs! This is most certainly a matter of incomplete guides and missing good templates, so maybe this post can help with that a bit:

The ultimate cheat-sheet to mark your app chassis-type compatible

As a quick reminder, compatibility is indicated using AppStream's relations system: A requires relation indicates that the software will not run at all, or will run terribly, if the requirement is not met; in that case, it should not be installable on the system. A recommends relation means that it would be advantageous to have the recommended items, but they are not essential to run the application (it may run with a degraded experience without the recommended things though). And a supports relation means a given interface/device/control/etc. is supported by this application, but the application may work completely fine without it.

I have a desktop-only application

A desktop-only application is characterized by needing a larger screen to fit the application, and by requiring a physical keyboard and accurate mouse input. This type is assumed by default if no capabilities are set for an application, but it's better to be explicit. This is the metadata you need:
<component type="desktop-application">
  <id>org.example.desktopapp</id>
  <name>DesktopApp</name>
  [...]
  <requires>
    <display_length>768</display_length>
    <control>keyboard</control>
    <control>pointing</control>
  </requires>
  [...]
</component>
With this requires relation, you require a small desktop-sized screen (at least 768 device-independent pixels (dp) on its smallest edge) and require a keyboard and mouse to be present or connectable. Of course, if your application needs more minimum space, adjust the requirement accordingly. Note that if the requirement is not met, your application may not be offered for installation.
Note: Device-independent / logical pixels

One logical pixel (= device-independent pixel) roughly corresponds to the visual angle of one pixel on a device with a pixel density of 96 dpi (for historical X11 reasons) and a distance from the observer of about 52 cm, making the physical pixel about 0.26 mm in size. When using logical pixels as a unit, they might not always map to exact physical lengths, as their exact size is defined by the device providing the display. They do however accurately depict the maximum amount of pixels that can be drawn in the given direction on the device's display space. AppStream always uses logical pixels when measuring lengths in pixels.
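To make the unit concrete, here is a small worked example; the panel sizes and scale factors below are hypothetical, not taken from the post:

    # Logical pixels = physical pixels / UI scale factor.
    # A 1366x768 laptop panel at 1x scale: shortest edge is 768 logical px,
    # which meets the <display_length>768</display_length> requirement above.
    echo $(( 768 / 1 ))    # -> 768
    # A phone panel with 1080 physical px on its short edge at 3x scale:
    echo $(( 1080 / 3 ))   # -> 360, far below the 768 dp desktop requirement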

I have an application that works on mobile and on desktop / an adaptive app

Adaptive applications have fewer hard requirements, but a wide range of support for controls and screen sizes. For example, they support touch input, unlike desktop apps. An example MetaInfo snippet for this kind of app may look like this:
<component type="desktop-application">
  <id>org.example.adaptive_app</id>
  <name>AdaptiveApp</name>
  [...]
  <requires>
    <display_length>360</display_length>
  </requires>
  <supports>
    <control>keyboard</control>
    <control>pointing</control>
    <control>touch</control>
  </supports>
  [...]
</component>
Unlike the pure desktop application, this adaptive application requires a much smaller lowest display edge length, and also supports touch input, in addition to keyboard and mouse/touchpad precision input.

I have a pure phone/tablet app

Making an application a pure phone application is tricky: we need to mark it as compatible with phones only, while not completely preventing its installation on non-phone devices (even if its UI is horrible there, you may want to test the app, and software centers may allow its installation when requested explicitly, even if they don't show it by default). This is how to achieve that result:
<component type="desktop-application">
  <id>org.example.phoneapp</id>
  <name>PhoneApp</name>
  [...]
  <requires>
    <display_length>360</display_length>
  </requires>
  <recommends>
    <display_length compare="lt">1280</display_length>
    <control>touch</control>
  </recommends>
  [...]
</component>
We require a phone-sized minimum display edge size (adjust to a value that fits your app!), but then also recommend that the screen have a smaller edge size than a larger tablet/laptop, while also recommending touch input and not listing any support for keyboard and mouse. Please note that this blog post is of course not a comprehensive guide, so if you want to dive deeper into what you can do with requires/recommends/suggests/supports, you may want to have a look at the relations tags described in the AppStream specification.
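Before relying on the compatibility checks shown in the next section, the file can also be run through the regular metadata validator; a minimal sketch, reusing the example component from above:

    appstreamcli validate ./org.example.phoneapp.metainfo.xml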

Validation

It is still easy to make mistakes with the system requirements metadata, which is why AppStream 1.0 will provide more commands to check MetaInfo files for system compatibility. Current pre-1.0 AppStream versions already have an is-satisfied command to check if the application is compatible with the currently running operating system:
:~$ appstreamcli is-satisfied ./org.example.adaptive_app.metainfo.xml
Relation check for: */*/*/org.example.adaptive_app/*
Requirements:
   Unable to check display size: Can not read information without GUI toolkit access.
Recommendations:
   No recommended items are set for this software.
Supported:
   Physical keyboard found.
   Pointing device (e.g. a mouse or touchpad) found.
   This software supports touch input.
In addition to this command, AppStream 1.0 will introduce a new one as well: check-syscompat. This command will check the component against libappstream's mock system configurations, which define a "most common" (whatever that is at the time) configuration for a respective chassis type. If you pass the --details flag, you can even get an explanation of why the component was or was not considered for a specific chassis type:
:~$ appstreamcli check-syscompat --details ./org.example.phoneapp.metainfo.xml
Chassis compatibility check for: */*/*/org.example.phoneapp/*
Desktop:
   Incompatible
   recommends: This software recommends a display with its shortest edge
   being < 1280 px in size, but the display of this device has 1280 px.
   recommends: This software recommends a touch input device.
Laptop:
   Incompatible
   recommends: This software recommends a display with its shortest edge
   being < 1280 px in size, but the display of this device has 1280 px.
   recommends: This software recommends a touch input device.
Server:
   Incompatible
   requires: This software needs a display for graphical content.
   recommends: This software needs a display for graphical content.
   recommends: This software recommends a touch input device.
Tablet:
   Compatible (100%)
Handset:
   Compatible (100%)
I hope this is helpful for people. Happy metadata writing!

Russ Allbery: Review: Chilling Effect

Review: Chilling Effect, by Valerie Valdes
Series: Chilling Effect #1
Publisher: Harper Voyager
Copyright: September 2019
Printing: 2020
ISBN: 0-06-287724-0
Format: Kindle
Pages: 420
Chilling Effect is a space opera, kind of; more on the genre classification in a moment. It is the first volume of a series, although it reaches a reasonable conclusion on its own. It was Valerie Valdes's first novel.

Captain Eva Innocente's line of work used to be less than lawful, following in the footsteps of her father. She got out of that life and got her own crew and ship. Now, the La Sirena Negra and its crew do small transport jobs for just enough money to stay afloat. Or, maybe, a bit less than that, when the recipient of a crate full of psychic escape-artist cats goes bankrupt before she can deliver it and get paid. It's a marginal and tenuous life, but at least she isn't doing anything shady.

Then the Fridge kidnaps her sister. The Fridge is a shadowy organization of extortionists whose modus operandi is to kidnap a family member of their target, stuff them in cryogenic suspension, and demand obedience lest the family member be sold off as indentured labor after a few decades as a popsicle. Eva will be given missions that she and her crew have to perform. If she performs them well, she will pay off the price of her sister's release. Eventually. Oh, and she's not allowed to tell anyone.

I found it hard to place the subgenre of this novel more specifically than comedy-adventure. The technology fits space opera: there are psychic cats, pilots who treat ships as extensions of their own body, brain parasites, a random intergalactic warlord, and very few attempts to explain anything with scientific principles. However, the stakes aren't on the scale that space opera usually goes for. Eva and her crew aren't going to topple governments or form rebellions. They're just trying to survive in a galaxy full of abusive corporations, dodgy clients, and the occasional alien who requires you to carry extensive documentation to prove that you can't be hunted for meat.

It is also, as you might guess from that description, occasionally funny. That part of the book didn't mesh for me. Eva is truly afraid for her sister, and some of the events in the book are quite sinister, but the antagonist is an organization called The Fridge that puts people in fridges. Sexual harassment in a bar turns into obsessive stalking by a crazed intergalactic warlord who frequently interrupts the plot by randomly blasting things with his fleet, which felt like something from Hitchhiker's Guide to the Galaxy. The stakes for Eva, and her frustrations at being dragged back into a life she escaped, felt too high for the wacky, comic descriptions of the problems she gets into.

My biggest complaint, though, is that the plot is driven by people not telling other people critical information they should know. Eva is keeping major secrets from her crew for nearly the entire book. Other people are also keeping information from Eva. There is a romance subplot driven almost entirely by both parties refusing to talk to each other about the existence of a romance subplot. For some people, this is catnip, but it's one of my least favorite fictional tropes and I found much of the book both frustrating and stressful. Fictional characters keeping important secrets from each other apparently raises my blood pressure.

One of the things I did like about this book is that Eva is Hispanic and speaks like it. She resorts to Spanish frequently for curses, untranslatable phrases, aphorisms, derogatory comments, and similar types of emotional communication that don't feel right in a second language. Most of the time one can figure out the meaning from context, but Valdes doesn't feel obligated to hold the reader's hand and explain everything. I liked that. I think this approach is more viable in these days of ebook readers that can attempt translations on demand, and I think it does a lot to make Eva feel like a real person.

I think the characters are the best part of this book, once one gets past the frustration of their refusal to talk to each other. Eva and the alien ship engineer get the most screen time, but Pink, Eva's honest-to-a-fault friend, was probably my favorite character. I also really enjoyed Min, the ship pilot whose primary goal is to be able to jack into the ship and treat it as her body, and otherwise doesn't particularly care about the rest of the plot as long as she gets paid.

A lot of books about ship crews like this one lean hard into found family. This one felt more like a group of coworkers, with varying degrees of friendship and level of interest in their shared endeavors, but without the too-common shorthand of making the less-engaged crew members either some type of villain or someone who needs to be drawn out and turned into a best friend or love interest. It's okay for a job to just be a job, even if it's one where you're around the same people all the time. People who work on actual ships do it all the time.

This is a half-serious, half-comic action romp that turned out to not be my thing, but I can see why others will enjoy it. Be prepared for a whole lot of communication failures and an uneven emotional tone, but if you're looking for a space-ships-and-aliens story that doesn't take itself very seriously and has some vague YA vibes, this may work for you. Followed by Prime Deceptions, although I didn't like this well enough to read on.

Rating: 6 out of 10

1 October 2023

Paul Wise: FLOSS Activities September 2023

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review
  • Spam: reported 2 Debian bug reports
  • Debian wiki: RecentChanges for the month
  • Debian BTS usertags: changes for the month
  • Debian screenshots:
    • approved fzf lame lsd termshark vifm
    • rejected orthanc (private data), gpr/orthanc (Windows), qrencode (random QR codes), weboob-qt (chess website)

Administration
  • Debian IRC: fix #debian-pkg-security topic/metadata
  • Debian wiki: unblock IP addresses, approve accounts

Communication

Sponsors

The SWH work was sponsored. All other work was done on a volunteer basis.

30 September 2023

Ian Jackson: DKIM: rotate and publish your keys

If you are an email system administrator, you are probably using DKIM to sign your outgoing emails. You should be rotating the key regularly and automatically, and publishing old private keys. I have just released dkim-rotate 1.0; dkim-rotate is a tool to do this key rotation and publication. If you are an email user, your email provider ought to be doing this. If this is not done, your emails are non-repudiable, meaning that if they are leaked, anyone (eg, journalists, haters) can verify that they are authentic, and prove that to others. This is not desirable (for you).

Non-repudiation of emails is undesirable

This problem was described at some length in Matthew Green's article Ok Google: please publish your DKIM secret keys. Avoiding non-repudiation sounds a bit like lying. After all, I'm advising creating a situation where some people can't verify that something is true, even though it is. So I'm advocating casting doubt. Crucially, though, it's doubt about facts that ought to be private. When you send an email, that's between you and the recipient. Normally you don't intend for anyone, anywhere, who happens to get a copy, to be able to verify that it was really you that sent it. In practical terms, this verifiability has already been used by journalists to verify stolen emails. Associated Press provide a verification tool.

Advice for all email users

As a user, you probably don't want your emails to be non-repudiable. (Other people might want to be able to prove you sent some email, but your email system ought to serve your interests, not theirs.) So, your email provider ought to be rotating their DKIM keys, and publishing their old ones. At a rough guess, your provider probably isn't :-(.

How to tell by looking at email headers

A quick and dirty way to guess is to have a friend look at the email headers of a message you sent. (It is important that the friend uses a different email provider, since often DKIM signatures are not applied within a single email system.) If your friend sees a DKIM-Signature header then the message is DKIM signed. If they don't, then it wasn't. Most email traversing the public internet is DKIM signed nowadays; so if they don't see the header, probably they're not looking using the right tools, or they're actually on the same email system as you. In messages signed by a system running dkim-rotate, there will also be a header about the key rotation, to notify potential verifiers of the situation. Other systems that avoid non-repudiation-through-DKIM might do something similar. dkim-rotate's header looks like this:
DKIM-Signature-Warning: NOTE REGARDING DKIM KEY COMPROMISE
 https://www.chiark.greenend.org.uk/dkim-rotate/README.txt
 https://www.chiark.greenend.org.uk/dkim-rotate/ae/aeb689c2066c5b3fee673355309fe1c7.pem
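Once the friend has saved the raw message, the header check itself is mechanical; a quick sketch (message.eml is a hypothetical file name for the saved message):

    # is the mail DKIM signed at all?
    grep -i '^DKIM-Signature:' message.eml
    # does the sending system announce key rotation, dkim-rotate style?
    grep -i '^DKIM-Signature-Warning:' message.eml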
But an email system might do half of the job of dkim-rotate: regularly rotating the key would cause the signatures of old emails to fail to verify, which is a good start. In that case there probably won't be such a header.

Testing verification of new and old messages

You can also try verifying the signatures. This isn't entirely straightforward, especially if you don't have access to low-level mail tooling. Your friend will need to be able to save emails as raw whole headers and body, un-decoded, un-rendered. If your friend is using a traditional Unix mail program, they should save the message as an mbox file. Otherwise, ProPublica have instructions for attaching and transferring and obtaining the raw email. (Scroll down to "How to Check DKIM and ARC".)

Checking that recent emails are verifiable

Firstly, have your friend test that they can in fact verify a DKIM signature. This will demonstrate that the next test, where the verification is supposed to fail, is working properly and fails for the right reasons. Send your friend a test email now, and have them do this on a Linux system:
    # save the message as test-email.mbox
    apt install libmail-dkim-perl # or equivalent on another distro
    dkimproxy-verify <test-email.mbox
You should see output containing something like this:
    originator address: ijackson@chiark.greenend.org.uk
    signature identity: @chiark.greenend.org.uk
    verify result: pass
    ...
If the output contains verify result: fail (body has been altered), then probably your friend didn't manage to faithfully save the unaltered raw message.

Checking that old emails cannot be verified

When you both have that working, have your friend find an older email of yours, from (say) a month ago. Perform the same steps. Hopefully they will see something like this:
    originator address: ijackson@chiark.greenend.org.uk
    signature identity: @chiark.greenend.org.uk
    verify result: fail (bad RSA signature)
or maybe
    verify result: invalid (public key: not available)
This indicates that this old email can no longer be verified. That's good: it means that anyone who steals a copy can't verify it either. If it's leaked, the journalist who receives it won't know it's genuine and unmodified; they should then be suspicious. If your friend sees verify result: pass, then they have verified that that old email of yours is genuine. Anyone who had a copy of the mail can do that. This is good for email thieves, but not for you.

For email admins: announcing dkim-rotate 1.0

I have been running dkim-rotate 0.4 on my infrastructure since last August, and I had entirely forgotten about it: it has run flawlessly for a year. I was reminded of the topic by seeing DKIM in other blog posts. Obviously, it is time to decree that dkim-rotate is 1.0. If you're a mail system administrator, your users are best served if you use something like dkim-rotate. The package is available in Debian stable (see the install note below), and supports Exim out of the box, but other MTAs should be easy to support too, via some simple ad-hoc scripting.

Limitation of this approach

Even with this key rotation approach, emails remain non-repudiable for a short period after they're sent, typically a few days. Someone who obtains a leaked email very promptly, and shows it to the journalist (for example) right away, can still convince the journalist. This is not great, but at least it doesn't apply to the vast bulk of your email archive. There are possible email protocol improvements which might help, but they're quite out of scope for this article.
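For reference, the Debian install mentioned above is a one-liner:

    apt install dkim-rotate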
Edited 2023-10-01 00:20 +01:00 to fix some grammar



François Marier: Things I do after uploading a new package to Debian

There are a couple of things I tend to do after packaging a piece of software for Debian, filing an Intent To Package bug and uploading the package. This is both a checklist for me and (hopefully) a way to inspire other maintainers to go beyond the basic package maintainer duties as documented in the Debian Developer's Reference. If I've missed anything, please leave a comment or send me an email!

Salsa for collaborative development

To foster collaboration and allow others to contribute to the packaging, I upload my package to a new subproject on Salsa. By doing this, I enable other Debian contributors to make improvements and propose changes via merge requests. I also like to upload the project logo in the settings page (i.e. https://salsa.debian.org/debian/packagename/edit) since that will show up on some dashboards like the Package overview.

Launchpad for interacting with downstream Ubuntu users

While Debian is my primary focus, I also want to keep an eye on how my package is doing on derivative distributions like Ubuntu. To do this, I subscribe to bugs related to my package on Launchpad. Ubuntu bugs are rarely Ubuntu-specific, and so I will often fix them in Debian. I also set myself as the answer contact on Launchpad Answers, since these questions are often the sign of a bug or a lack of documentation. I don't generally bother to fix bugs on Ubuntu directly though, since I've not had much luck with packages in universe lately. I'd rather not spend much time preparing a package that's not going to end up being released to users as part of a Stable Release Update. On the other hand, I have successfully requested simple Debian syncs when an important update was uploaded after the Debian Import Freeze.

Screenshots and tags

I take screenshots of my package and upload them to https://screenshots.debian.net to help users understand what my package offers and how it looks. I believe that these screenshots end up in software "store" type applications. Similarly, I add tags to my package using https://debtags.debian.org. I'm not entirely sure where these tags are used, but they are visible from apt show packagename.

Monitoring Upstream Releases

Staying up-to-date with upstream releases is one of the most important duties of a software packager. There are a lot of different ways that upstream software authors publicize their new releases. Here are some of the things I do to monitor these releases:
  • I have a cronjob which runs uscan once a day to check for new upstream releases using the information specified in my debian/watch files:
      0 12 * * 1-5   francois  test -e /home/francois/devel/deb && HTTPS_PROXY= https_proxy= uscan --report /home/francois/devel/deb || true
    
  • I subscribe to the upstream project's releases RSS feed, if available. For example, I subscribe to the GitHub tags feed for git-secrets and Launchpad announcements for email-reminder.
  • If the upstream project maintains an announcement mailing list, I subscribe to it (e.g. rkhunter-announce or tor release announcements).
When nothing else is available, I write a cronjob that downloads the upstream changelog once a day and commits it to a local git repo:
#!/bin/bash
pushd /home/francois/devel/zlib-changelog > /dev/null
wget --quiet -O ChangeLog.txt https://zlib.net/ChangeLog.txt || exit 1
git diff
git commit -a -m "Updated changelog" > /dev/null
popd > /dev/null
This sends me a diff by email when a new release is added (and no emails otherwise).
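For completeness, a sketch of how such a watcher script might be wired up; the path and schedule below are hypothetical, and it is cron's habit of mailing any non-empty stdout to the user that turns the git diff into an email:

    # /etc/crontab style entry (minute hour dom month dow user command)
    30 6 * * *   francois  /home/francois/bin/zlib-changelog-watch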

29 September 2023

Scarlett Gately Moore: KDE: Another Busy Week! KDE neon, Debian, Snaps Oh My!

KDE Plasma 6
I would like to welcome you to my revamped site. It is still a work in progress, so please be patient while I work out the kinks! I have also explained a bit more about myself on my About Me page, for those that may have questions about my homesteader lifestyle. Check it out when you have time. My site is mostly my adventures in packaging software in Linux in a variety of formats (mostly Debian and Ubuntu Snaps containerized packages). This keeps me very busy, as folks don't realize the importance of packaging. Without it, applications remain in source code form, which isn't very usable by users! While turning the source code into something user friendly we often run into issues and work with upstream (I am a very strong believer in upstream first) to resolve any issues. This makes for a better user experience and less buggy software. Workarounds are very hard to maintain, and thus fixing it right the first time is the best path! With this said, while I am not strong in any one programming language (well, maybe Ruby from my CI tooling background), I am versed in many languages, as I have to understand the code that I am filing bug reports for! We have to be able to understand build failures, debug runtime failures, and most importantly we have to be able to fix them, or find the resources to assist in fixing them. As most of you know, I am KDE's biggest fan (there is nothing wrong with Gnome, it's a great platform). So a big portion of my work is dedicated to KDE. A fantastic tool for working on my KDE packaging has been KDE neon! With the developer version I have all the tools necessary to debug and fix issues that arise. There is also the added bonus of living on the edge and finding out runtime issues right away! That is enough about me for now, and on to my weekly round up!

KDE neon: Carlos (check out his new blog! https://www.ethicalconstruct.au/dotclear_blog/) and I have been very busy with another round of KDE applications making the move to Qt6. We have finished KDE PIM and KDE Games in Neon/unstable! I have worked out issues with print-manager and re-enabled it in experimental, as its Qt6 development is still happening in the kf6 branch. Instructions here: https://blog.neon.kde.org/2023/08/04/announcing-kde-neon-experimental/ Fixed issues with a broken kscreenlocker and missing window decorations. You can now safely leave your computer and not worry about that dreaded black screen.

Debian: I have uploaded the newest squashfuse to unstable. I have uploaded another NEW dependency for bubblegum: golang-github-alecthomas-mango-kong-dev.

Ubuntu Snaps: This week continues working closely with Jarred Wilson of Canonical in getting his Qt6 content snap in shape for use with my KDE Frameworks 6 snap (an essential snap to move forward with our next generation Qt6 applications) and of course the Plasma snap. I spent some time debugging the neochat snap and fixed some QML issues, but I am now facing issues with Wayland. It now works fine for those of us still on X11. I will continue working out Wayland.

Thank you! I rely on donations to upkeep my everyday living, and so far, thanks to each and every one of you, I have survived almost a full year! It has been scary from time to time, but I am surviving. Until my snap project goes through, I must rely on the kindness of my supporters. The proceeds of my donations go to the following: I have joined the kool kids and moved to Donorbox for donations.
Donate
I still have Gofundme for those that don't want to sign up for yet another donation platform. https://gofund.me/b8b69e54

24 September 2023

Thomas Goirand: Searching for a Ryzen 9, 16 cores, small laptop

The new 7945HX CPU from AMD is currently the most powerful. I'd love to have one of them, to replace the now aging 6-core Xeon that I've been using for more than 5 years. So, I've been searching for a laptop with that CPU. Absolutely all of the laptops I found with this CPU also embed a very powerful RTX 40x0 series GPU that I have no use for: I don't play games, and I don't do AI. I just want something that builds Debian packages fast (like Ceph, which takes more than 1h to build for me). The more cores I get, the faster all OpenStack unit tests run too (stestr does a moderately good job at spreading the tests across all cores). It would be OK to pay more for a GPU that I don't need, and to deal with the annoyance of the NVidia driver, if only I could find something with a correct size. But I can only find 16-inch or bigger laptops, which won't fit in my scooter back case (most of the time, these laptops have a 17-inch screen: that's way too big).

Currently, I found:

If one of the readers of this post finds a smaller laptop with a 7945HX CPU, please let me know! Even better if I can get rid of the expensive NVidia GPU.

12 September 2023

Jo Shields: Building a NAS

The status quo

Back in 2015, I bought an off-the-shelf NAS, a QNAP TS-453mini, to act as my file store and Plex server. I had previously owned a Synology box, and whilst I liked the Synology OS and experience, the hardware was underwhelming. I loaded up the successor QNAP with four 5TB drives in RAID10, and moved all my files over (after some initial DoA drive issues were handled).
QNAP TS-453mini product photo
That thing has been in service for about 8 years now, and it's been a mixed bag. It was definitely more powerful than the predecessor system, but it was clear that QNAP's OS was not up to the same standard as Synology's, perhaps best exemplified by "HappyGet 2", the QNAP webapp for downloading videos from streaming services like YouTube, whose icon is a straight rip-off of StarCraft 2. On its own, meaningless, but a bad omen for overall software quality.
The logo for QNAP HappyGet 2 and Blizzard's StarCraft 2 side by side
Additionally, the embedded Celeron processor in the NAS turned out to be an issue for some use cases. It turns out, when playing back videos with subtitles, most Plex clients do not support subtitles properly; instead they rely on the Plex server doing JIT transcoding to bake the subtitles directly into the video stream. I discovered this with some Blu-Ray rips of Game of Thrones: some episodes would play back fine on my smart TV, but episodes with subtitled Dothraki speech would play at only 2 or 3 frames per second.

The final straw was a ransomware attack, which went through all my data and locked every file below a 60MiB threshold. Practically all my music, gone. A substantial collection of downloaded files, all gone. Some of these files had been carried around since my college days: digital rarities, or at least digital detritus I felt a real sense of loss at having to replace. This episode was caused by ransomware targeting specific vulnerabilities in the QNAP OS, not an error on my part. So, I decided to start planning a replacement with:
  • A non-garbage OS, whilst still being a NAS-appliance type offering (not an off-the-shelf Linux server distro)
  • Full remote management capabilities
  • A small form factor comparable to off-the-shelf NAS
  • A powerful modern CPU capable of transcoding high resolution video
  • All flash storage, no spinning rust
At the time, no consumer NAS offered everything (the Asustor FS6712X exists now, but didn't when this project started), so I opted to go for a full DIY build rather than an appliance; not the first time I've jumped between appliances and DIY for home storage.

Selecting the core of the system

There aren't many companies which will sell you a small motherboard with IPMI. Supermicro is a bust, so is Tyan. But ASRock Rack, the server division of third-tier motherboard vendor ASRock, delivers. Most of their boards aren't actually compliant Mini-ITX size; they're a proprietary "Deep Mini-ITX" with the regular screw holes, but 40mm of extra length (and a commensurately small list of compatible cases). But, thankfully, they do have a tiny selection of boards without the extra size, and I stumbled onto the X570D4I-2T, a board with an AMD AM4 socket and the mature X570 chipset. This board can use any AMD Ryzen chip (before the latest-gen Ryzen 7000 series); has built-in dual 10 gigabit ethernet; IPMI; four (laptop-sized) RAM slots with full ECC support; one M.2 slot for NVMe SSD storage; a PCIe 16x slot (generally for graphics cards, but we live in a world of possibilities); and up to 8 SATA drives OR a couple more NVMe SSDs. It's astonishingly well featured, just a shame it costs about $450, compared to a good consumer-grade Mini-ITX AM4 board costing less than half that. I was so impressed with the offering, in fact, that I crowed about it on Mastodon and ended up securing ASRock another sale, with someone else looking into a very similar project to mine around the same timespan.

The next question was the CPU. An important feature of a system expected to run 24/7 is low power, and AM4 chips can consume as much as 130W under load, out of the box. At the other end, some models can require as little as 35W under load: the OEM-only "GE" suffix chips, which are readily found for import on eBay. In their PRO variant, they also support ECC (all non-G Ryzen chips support ECC, but only Pro G chips do). The top of the range 8-core Ryzen 7 PRO 5750GE is prohibitively expensive, but the slightly weaker 6-core Ryzen 5 PRO 5650GE was affordable, and one arrived quickly from Hong Kong. Supplemented with a couple of cheap 16 GiB SODIMM sticks of DDR4 PC-3200 direct from Micron, for under $50 a piece, that left only cooling as an unsolved problem to get a bootable test system.

The official support list for the X570D4I-2T only includes two rackmount coolers, both expensive and hard to source. The reason for such a small list is the non-standard cooling layout of the board: instead of an AM4 hole pattern with the standard plastic AM4 retaining clips, it has an Intel 115x hole pattern with a non-standard backplate (Intel 115x boards have no backplate; the stock Intel 115x cooler attaches to the holes with push pins). As such, every single cooler compatibility list excludes this motherboard. However, the backplate is only secured with a mild glue; with minimal pressure and a plastic prying tool it can be removed, giving compatibility with any 115x cooler (which covers basically any CPU cooler from the last decade or more). I picked an oversized low-profile Thermalright AXP120-X67, hoping that its 120mm fan would cool the nearby MOSFETs and X570 chipset too.
Thermalright AXP120-X67, AMD Ryzen 5 PRO 5650GE, ASRock Rack X570D4I-2T, all assembled and running on a flat surface

Testing up to this point

Using a spare ATX power supply, I had enough of a system built to explore the IPMI and UEFI instances, and run MemTest86 to validate my progress. The memory test ran without a hitch and confirmed the ECC was working, although it also showed that the memory was only running at 2933 MT/s instead of the rated 3200 MT/s (a limit imposed by the motherboard, as higher speeds are considered overclocking). The IPMI interface isn't the best I've ever used by a long shot, but it's minimum viable and allowed me to configure the basics and boot from media entirely via a web browser.
Memtest86 showing test progress, taken from IPMI remote control window
One sad discovery, however, which I've never seen documented before, concerns PCIe bifurcation. With PCI Express, you have a number of lanes which are allocated in groups by the motherboard and CPU manufacturer. For Ryzen prior to Ryzen 7000, that's 16 lanes in one slot for the graphics card; 4 lanes in one M.2 connector for an SSD; then 4 lanes connecting the CPU to the chipset, which can offer whatever it likes for peripherals or extra lanes (bottlenecked by that shared 4x link to the CPU, if it comes down to it). It's possible, with motherboard and CPU support, to split PCIe groups up; for example, an 8x slot could be split into two 4x slots (e.g. allowing two NVMe drives in an adapter card; NVMe drives these days all use 4x). However, with a Cezanne Ryzen with integrated graphics, the 16x graphics card slot cannot be split into four 4x slots (i.e. used for NVMe drives); the most bifurcation it allows is 8x4x4x, which is useless in a NAS.
Screenshot of PCIe 16x slot bifurcation options in UEFI settings, taken from IPMI remote control window
As such, I had to abandon any ideas of the all-NVMe NAS I was considering: the 16x slot split into four 4x, combined with two 4x connectors fed by the X570 chipset, for a total of 6 NVMe drives. 7.6TB U.2 enterprise disks are remarkably affordable (cheaper than consumer SATA 8TB drives), but alas, I was locked out by my 5650GE. Thankfully I found out before spending hundreds on a U.2 hot swap bay. The NVMe setup would have been nearly 10x as fast as SATA SSDs, but at least the SATA SSD route would still outperform any spinning rust choice on the market (including the fastest 10K RPM SAS drives).

Containing the core

The next step was to pick a case and power supply. A lot of NAS cases require an SFX (rather than ATX) size supply, so I ordered a modular SX500 unit from Silverstone. Even if I ended up with a case requiring ATX, it's easy to turn an SFX power supply into ATX, and the worst result is you have less space taken up in your case, hardly the worst problem to have. That said, on to picking a case. There's only one brand with any cachet making ITX NAS cases, Silverstone. They have three choices in an appropriate size: CS01-HS, CS280, and DS380. The problem is, these cases are all badly designed garbage. Take the CS280 as an example, the case with the most space for a CPU cooler. Here's how close together the hotswap bay (right) and power supply (left) are:
Internal image of Silverstone CS280 NAS build. Image stolen from ServeTheHome
With actual cables connected, the cable clearance problem is even worse:
Internal image of Silverstone CS280 NAS build. Image stolen from ServeTheHome
Remember, this is the best of the three cases for internal layout, the one with the least restriction on CPU cooler height. And it's garbage! Total hot garbage! I decided therefore to completely skip the NAS case market, and instead purchase a 5.25"-to-2.5" hot swap bay adapter from Icy Dock, and put it in an ITX gamer case with a 5.25" bay. This is no longer a served market: 5.25" bays are extinct since nobody uses CD/DVD drives anymore. The ones on the market are really new old stock from 2014-2017: the Fractal Design Core 500, Cooler Master Elite 130, and Silverstone SUGO 14. Of the three, the Fractal is the best rated, so I opted to get that one; however, it seems the global supply of new old stock fully dried up in the two weeks between me making a decision and placing an order, leaving only the Silverstone case. Icy Dock have a selection of 8-bay 2.5" SATA 5.25" hot swap chassis choices in their ToughArmor MB998 series. I opted for the ToughArmor MB998IP-B to reduce cable clutter: it requires only two SFF-8611-to-SFF-8643 cables from the motherboard to serve all eight bays, which should make airflow less of a mess. The X570D4I-2T doesn't have any SATA ports on board; instead it has two SFF-8611 OCuLink ports, each supporting 4 PCI Express lanes OR 4 SATA connectors via a breakout cable. I had hoped to get the ToughArmor MB118VP-B and run six U.2 drives, but as I said, the PCIe bifurcation issue with Ryzen G chips meant I wouldn't be able to run all six bays successfully.
NAS build in Silverstone SUGO 14, mid build, panels removed
Silverstone SUGO 14 from the front, with hot swap bay installed

Actual storage for the storage server

My concept for the system always involved a fast boot/cache drive in the motherboard's M.2 slot, non-redundant (just backups of the config if the worst were to happen), and separate storage drives somewhere between 3.8 and 8 TB each (somewhere from $200-$350). As a boot drive, I selected the Intel Optane SSD P1600X 58G, available for under $35 and rated for 228 years between failures (or 11,000 complete drive rewrite cycles). So, on to the big expensive choice: storage drives. I narrowed it down to two contenders: new-old-stock Intel D3-S4510 3.84TB enterprise drives, at about $200, or Samsung 870 QVO 8TB consumer drives, at about $375. I did spend a long time agonizing over the specification differences, the ZFS usage reports, the expected lifetime endurance figures, but in reality, it came down to price: $1600 of expensive drives vs $3200 of even more expensive drives. That's 27TB of usable capacity in RAID-Z1, or 23TB in RAID-Z2. For comparison, I'm using about 5TB of the old NAS, so that's a LOT of overhead for expansion.
Storage SSD loaded into hot swap sled

Booting up

Bringing it all together is the OS. I wanted an appliance NAS OS rather than self-administering a Linux distribution, and after looking into the surrounding ecosystems, decided on TrueNAS Scale (the beta of the 2023 release, based on Debian 12).
TrueNAS Dashboard screenshot in browser window
I set up RAID-Z1, and with zero tuning (other than enabling auto-TRIM), got the following performance numbers:
                     IOPS     Bandwidth
4k random writes     19.3k    75.6 MiB/s
4k random reads      36.1k    141 MiB/s
Sequential writes    -        2300 MiB/s
Sequential reads     -        3800 MiB/s
Results using fio parameters suggested by Huawei
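The post does not reproduce the Huawei-suggested fio parameters, so as a rough sketch only, a 4k random-read run of this general shape could look like this (the file name, size, queue depth and runtime are all assumptions):

    fio --name=randread --filename=/mnt/tank/fio.test --size=4G \
        --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
        --runtime=60 --time_based --group_reporting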
And for comparison, the maximum theoretical numbers quoted by Intel for a single drive:
                     IOPS     Bandwidth
4k random writes     16k      ?
4k random reads      90k      ?
Sequential writes    -        280 MiB/s
Sequential reads     -        560 MiB/s
Numbers quoted by Solidigm, the successor to Intel's SSD business.
Finally, the numbers reported on the old NAS with four 7200 RPM hard disks in RAID 10:
                     IOPS     Bandwidth
4k random writes     430      1.7 MiB/s
4k random reads      8006     32 MiB/s
Sequential writes    -        311 MiB/s
Sequential reads     -        566 MiB/s
Performance seems pretty OK. There's always going to be an overhead to RAID. I'll settle for the 45x improvement on random writes vs. its predecessor, and the 4.5x improvement on random reads. The sequential write numbers are gonna be impacted by the size of the ZFS cache (50% of RAM, so 16 GiB), but the rest should be a reasonable indication of true performance. It took me a little while to fully understand the TrueNAS permissions model, but I finally got Plex configured to access data from the same place as my SMB shares, which have anonymous read-only access or authenticated write access for myself and my wife, working fine via both Linux and Windows. And that's it! I built a NAS. I intend to add some fans and more RAM, but that's the build. Total spent: about $3000, which sounds like an unreasonable amount, but it's actually less than a comparable Synology DiskStation DS1823xs+, which has 4 cores instead of 6, first-generation AMD Zen instead of Zen 3, 8 GiB RAM instead of 32 GiB, no hardware-accelerated video transcoding, etc. And it would have been a whole lot less fun!
The final system, powered up
(Also posted on PCPartPicker)

John Goerzen: A Maze of Twisty Little Pixels, All Tiny

Two years ago, I wrote Managing an External Display on Linux Shouldn't Be This Hard. Happily, since I wrote that post, most of those issues have been resolved. But then you throw HiDPI into the mix and it all goes wonky. If you're running X11, basically the story is that you can change the scale factor, but it only takes effect on newly-launched applications (which means a logout/login, because some of your applications you can't really re-launch). That is a problem if, like me, you sometimes connect an external display that is HiDPI, sometimes not, or your internal display is HiDPI but others aren't. Wayland is far better, supporting on-the-fly resizes quite nicely.

I've had two devices with HiDPI displays: a Surface Go 2, and a work-issued Thinkpad. The Surface Go 2 is my ultraportable Linux tablet. I use it sparingly at home, and rarely with an external display. I just put Gnome on it, in part because Gnome had better on-screen keyboard support at the time, and left it at that. On the work-issued Thinkpad, I really wanted to run KDE thanks to its tiling support (I wound up using Bismuth with it). KDE was buggy with Wayland at the time, so I just stuck with X11, ran my HiDPI displays at lower resolutions, and lived with the fuzziness. But now that I have a Framework laptop with a HiDPI screen, I wanted to get this right. I tried both Gnome and KDE. Here are my observations with both:

Gnome

I used PaperWM with Gnome. PaperWM is a tiling manager with a unique horizontal ribbon approach. It grew on me; I think I would be equally at home with it, or maybe even prefer it, to my usual xmonad-style approach. Editing the active window border color required editing ~/.local/share/gnome-shell/extensions/paperwm@hedning:matrix.org/stylesheet.css and inserting background-color and border-color items in the paperwm-selection section.

Gnome continues to have an absolutely terrible picture for configuring things. It has no less than four places to make changes (Settings, Tweaks, Extensions, and dconf-editor). In many cases, configuration for a given thing is split between Settings and Tweaks, and sometimes even with Extensions; and then there are sometimes options that are only visible in dconf. That is, where the Gnome people have even allowed something to be configurable.

Gnome installs a power manager by default. It offers three options: performance, balanced, and saver. There is no explanation of the difference between them. None. What is it setting when I change the pref? A maximum frequency? A scaling governor? A balance between performance and efficiency cores? Not only that, but there's no way to tell it to just use performance when plugged in and balanced or saver when on battery. In an issue about adding that, a Gnome dev wrote "We're not going to add a preference just because you want one". KDE, on the other hand, aside from not mucking with your system's power settings in this way, has a nice panel with "on AC" and "on battery" settings, and you can very easily tweak various settings accordingly. The hostile attitude from the Gnome developers in that thread was a real turnoff.

While Gnome has excellent support for Wayland, it doesn't (directly) support fractional scaling. That is, you can set it to 100%, 200%, and so forth, but no 150%. Well, unless you manage to discover that you can run gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']" first. (Oh wait, does that make a FIFTH settings tool? Why yes it does.)
Despite its name, that allows you to select fractional scaling under Wayland. For X11 apps, they will be blurry, a problem that is optional under KDE (more on that below).

Gnome won't show the battery life time remaining on the task bar. Yikes. An extension might work in some cases. Not only that, but the Gnome battery icon frequently failed to indicate AC charging when AC was connected, a problem that didn't exist on KDE. Both Gnome and KDE support night light (warmer color temperatures at night), but Gnome's often didn't change when it should have, or changed on one display but not the other.

The appindicator extension is pretty much required, as otherwise a number of applications (eg, Nextcloud) don't have their icon displayed anywhere. It does, however, generate a significant amount of log spam. There may be a fix for this.

Unlike KDE, which has a nice unobtrusive popup asking what to do, Gnome silently automounts USB sticks when inserted. This is often wrong; for instance, if I'm about to dd a Debian installer to it, I definitely don't want it mounted. I learned this the hard way. It is particularly annoying because in a GUI, there is no reason to mount a drive before the user tries to access it anyhow. It looks like there is a dconf setting, but then to actually mount a drive you have to open up Files (because OF COURSE Gnome doesn't have a nice removable-drives icon like KDE does) and it's a bunch of annoying clicks, and I didn't want to use the GUI file manager anyway. Same for unmounting; two clicks in KDE thanks to the task bar icon, but in Gnome you have to open up the file manager, unmount the drive, close the file manager again, etc.

The ssh agent on Gnome doesn't start up for a Wayland session, though this is easily enough worked around.

The reason I completely soured on Gnome is that after using it for a while, I noticed my laptop fans spinning up. One core would be constantly busy. It was busy with a kworker events task, something to do with sound events. Logging out would resolve it. I believe it to be a Gnome Shell issue. I could find no resolution to this, and am unwilling to tolerate the decreased battery life this implies. The Gnome summary: it looks nice out of the box, but you quickly realize that this is something of a paper-thin illusion when you try to actually use it regularly.

KDE

The KDE experience on Wayland was a little bit the opposite of Gnome. While with Gnome things start out looking great but you realize there are some serious issues (especially battery-eating), with KDE things start out looking a tad rough but you realize you can trivially fix them and wind up with a very solid system.

Compared to Gnome, KDE never had a battery-draining problem. It will show me estimated battery time remaining if I want it to. It will do whatever I want it to when I insert a USB drive. It doesn't muck with my CPU power settings, and lets me easily define "on AC" vs "on battery" settings for things like suspend when idle. KDE supports fractional scaling, to any arbitrary setting (even with the gsettings thing above, Gnome still only supports it in 25% increments).

Then the question is what to do with X11-only applications. KDE offers two choices. The first is "Scaled by the system", which is also the only option for Gnome. With that setting, the X11 apps effectively run natively at 100% and then are scaled up within Wayland, giving them a blurry appearance on HiDPI displays.
The advantage is that the scaling happens within Wayland, so the size of the app will always be correct even when the Wayland scaling factor changes. The other option is "Apply scaling themselves", which uses native X11 scaling. This lets most X11 apps display crisp and sharp, but then if the system scaling changes, due to limitations of X11, you'll have to restart the X apps to get them to be the correct size. I appreciate the choice, and use "Apply scaling themselves" because only a few of my apps aren't Wayland-aware.

I did encounter a few bugs in KDE under Wayland: sddm, the display manager, would be slow to stop and cause a long delay on shutdown or reboot. This seems to be a known issue with sddm and Wayland, and is easily worked around by adding a systemd TimeoutStopSec (a sketch of such an override follows at the end of this post). Konsole, the KDE terminal emulator, has weird display artifacts when using fractional scaling under Wayland. I applied some patches and rebuilt Konsole and then all was fine. The Bismuth tiling extension has some pretty weird behavior under Wayland, but a 1-character patch fixes it. On Debian, KDE mysteriously installed Pulseaudio instead of Debian's new default Pipewire, but that was easily fixed as well (and Pulseaudio also works fine).

Conclusions

I'm sticking with KDE. Given that I couldn't figure out how to stop Gnome from deciding to eat enough battery to make my fan come on, the decision wasn't hard. But even if it weren't for that, I'd have gone with KDE. Once a couple of things were patched, the experience is solid, fast, and flawless. Emacs (my main X11-only application) looks great with the self-scaling in KDE. Gimp, which I use occasionally, was terrible with the blurry scaling in Gnome.

Update: Corrected the gsettings command
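As promised above, a sketch of the sddm shutdown workaround; the drop-in mechanism is standard systemd, but the exact timeout value here is an assumption, not the one from the post:

    # open an editor on a drop-in override for the sddm unit
    sudo systemctl edit sddm.service
    # then add the following two lines in the drop-in:
    #   [Service]
    #   TimeoutStopSec=10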

11 September 2023

John Goerzen: For the First Time In Years, I'm Excited By My Computer Purchase

Some decades back, when I'd buy a new PC, it would unlock new capabilities. Maybe AGP video, or a PCMCIA slot, or, heck, sound. Nowadays, mostly new hardware means things get a bit faster or less crashy, or I have some more space for files. It's good and useful, but sorta meh. Not this purchase. Cory Doctorow wrote about the Framework laptop in 2021:
There's no tape. There's no glue. Every part has a QR code that you can shoot with your phone to go to a service manual that has simple-to-follow instructions for installing, removing and replacing it. Every part is labeled in English, too! The screen is replaceable. The keyboard is replaceable. The touchpad is replaceable. Removing the battery and replacing it takes less than five minutes. The computer actually ships with a screwdriver.
Framework had been on my radar for a while. But for various reasons, when I was ready to purchase, I didn't; either the waitlist was long, or they didn't have the specs I wanted. Lately my aging laptop with 8GB RAM started OOMing (running out of RAM). My desktop had developed a tendency to hard hang about once a month, and I researched replacing it, but the cost was too high to justify. But when I looked into the Framework, I thought: this thing could replace both. It is a real shift in perspective to have a laptop that is nearly as upgradable as a desktop, and can be specced out to exactly what I wanted: 2TB storage and 64GB RAM. And still cheaper than a Macbook or Thinkpad with far lower specs, because the Framework uses off-the-shelf components as much as possible. Cory Doctorow wrote, in The Framework is the most exciting laptop I've ever broken:
The Framework works beautifully, but it fails even better. Framework has designed a small, powerful, lightweight machine: it works well. But they've also designed a computer that, when you drop it, you can fix yourself. That attention to graceful failure saved my ass.
I like small laptops, so I ordered the Framework 13. I loaded it up with the 64GB RAM and 2TB SSD I wanted. Frameworks have four configurable ports, which are also hot-swappable. I ordered two USB-C, one USB-A, and one HDMI. I put them in my preferred spots (one USB-C on each side for easy docking and charging). I put Debian on it, and it all Just Worked. Perfectly. Now, I ordered the DIY version. I hesitated about this; I HATE working with laptops because they're all so hard, even though I KNEW this one was different, but went for it, because my preferred specs weren't available in a pre-assembled model. I'm glad I did that, because assembly was actually FUN. I got my box. I opened it. There was the bottom shell with the motherboard and CPU installed. Here are the RAM sticks. There's the SSD. A minute or two with each has them installed. Put the bezel on the screen, attach the keyboard (it has magnets to guide it into place) and boom, ready to go. Less than 30 minutes to assemble a laptop nearly from scratch. It was easier than assembling most desktops. So now, for the first time, my main computing device is a laptop. Rather than having a desktop and a laptop, I just have a laptop. I'll be able to upgrade parts of it later if I want to. I can rearrange the ports. And I can take all my most important files with me. I'm quite pleased!

8 September 2023

Valhalla's Things: Banners and Signs

Posted on September 8, 2023
I forgot to write down the details back when it happened, but now that the surprise has been delivered I can write about it.

A triangular fabric banner, black with a reflective grey border, and a penguin outline where part of the outline is in the shape of Lake Como, screenprinted in white and light blue.

Some time ago, I decided to make a small banner with the GL-Como penguin for a friend, because reasons. However, this friend has a big problem: he, well, is from Pisa (no, I'm not from Leghorn, why do you ask?), and I had a screen printing kit, openclipart and no inhibitions.

Three fabric banners: one is the one mentioned above; two are square with a yellow corded border, a yellow triangle and a tower of Pisa in black in the middle. The yellow triangles aren't perfectly flat yellow, but somewhat ruined, one more than the other.

So, with the encouragement of a few friends who were in on the secret, this happened. In two copies, because the first attempt at the print had issues. And yesterday we finally met that friend again, gave him all of the banners, and no violence happened, but he liked them :D

An ISO 7071-style triangle warning sign with a simplified tower of Pisa in black on yellow background.

If somebody is interested, the source image I used is on openclipart, with links to all of the sources I've used. I don't remember exactly how it happened, but when I was working on the Pisani sign I also stumbled on the "no dogs" sign and decided that the world needed a "mandatory cat" sign, and well, here is the full set (all images are a link to the openclipart page).

ISO 7071 "no dogs" sign, a black dog on white background with a red circle with a diagonal line. ISO 7071-style "mandatory cats" sign, a white cat on blue circle background. ISO 7071-style "mandatory dogs" sign, a white dog on blue circle background. ISO 7071-style "no cats" sign, a black cat on white background with a red circle with a diagonal line.

21 August 2023

Melissa Wen: AMD Driver-specific Properties for Color Management on Linux (Part 1)

TL;DR: Color is a visual perception. Human eyes can detect a broader range of colors than any device in the graphics chain. Since each device can generate, capture or reproduce a specific subset of colors and tones, color management controls color conversion and calibration across devices to ensure a more accurate and consistent color representation. We can expose a GPU-accelerated display color management pipeline to support this process and enhance results, and this is what we are doing on Linux to improve color management on Gamescope/SteamDeck. Even with the challenges of being external developers, we have been working on mapping AMD GPU color capabilities to the Linux kernel color management interface, which is a combination of DRM and AMD driver-specific color properties. This more extensive color management pipeline includes pre-defined Transfer Functions, 1-Dimensional LookUp Tables (1D LUTs), and 3D LUTs before and after the plane composition/blending.
The study of color is well-established and has been explored for many years. Color science and research findings have also guided technology innovations. As a result, color in Computer Graphics is a very complex topic that I'm putting a lot of effort into becoming familiar with. I always find myself rereading all the materials I have collected about color spaces and operations since I started this journey (about one year ago). I also understand how hard it is to find consensus on some color subjects, as exemplified by all the explanations around the 2015 online viral phenomenon of The Black and Blue Dress. Have you heard about it? What is the color of the dress for you? So, taking into account my skills with colors and building consensus, this blog post only focuses on GPU hardware capabilities to support color management :-D If you want to learn more about color concepts and color on Linux, you can find useful links at the end of this blog post.

Linux Kernel, show me the colors ;D The DRM color management interface only exposes a small set of post-blending color properties. Proposals to enhance the DRM color API from different vendors have landed on the subsystem mailing list over the last few years. On one hand, we got some suggestions to extend the DRM post-blending/CRTC color API: DRM CRTC 3D LUT for R-Car (2020 version); DRM CRTC 3D LUT for Intel (draft - 2020); DRM CRTC 3D LUT for AMD by Igalia (v2 - 2023); DRM CRTC 3D LUT for R-Car (v2 - 2023). On the other hand, there were some proposals to extend the DRM pre-blending/plane API: DRM plane colors for Intel (v2 - 2021); DRM plane API for AMD (v3 - 2021); DRM plane 3D LUT for AMD - 2021. Finally, Simon Ser sent the latest proposal in May 2023: Plane color pipeline KMS uAPI, from discussions in the 2023 Display/HDR Hackfest, and it is still under evaluation by the Linux Graphics community. All previous proposals seek a generic solution for expanding the API, but many seem to have stalled due to the uncertainty of matching the hardware capabilities of all vendors well. Meanwhile, the use of AMD color capabilities on Linux remained limited by the DRM interface, as the DCN 3.0 family color caps and mapping diagram below shows for the Linux/DRM color interface without driver-specific color properties [*]. Bearing in mind that we need to know the variety of color pipelines in the subsystem to be clear about a generic solution, we decided to approach the issue from a different perspective and worked on enabling a set of Driver-Specific Color Properties for AMD Display Drivers. As a result, I recently sent another round of the AMD driver-specific color mgmt API. For those who have been following the AMD driver-specific proposal since the beginning (see [RFC][V1]), the main new features of the latest version [v2] are the addition of a pre-blending Color Transformation Matrix (plane CTM) and the differentiation of Pre-defined Transfer Functions (TF) supported by color blocks. For those who just got here, I will recap this work in two blog posts. This one describes the current status of the AMD display driver in the Linux kernel/DRM subsystem and what changes with the driver-specific properties. In the next post, we go deeper to describe the features of each color block and provide a better picture of what is available in terms of color management for Linux.

The Linux kernel color management API and AMD hardware color capabilities Before discussing colors in the Linux kernel with AMD hardware, consider accessing the Linux kernel documentation (version 6.5.0-rc5). In the AMD Display documentation, you will find my previous work documenting AMD hardware color capabilities and the Color Management Properties. It describes how the AMD Display Manager (DM) intermediates requests between the AMD Display Core component (DC) and the Linux/DRM kernel interface for color management features. It also describes the relevant functions that call the AMD color module to build curves for content space transformations. A subsection also describes hardware color capabilities and how they evolve between versions. This subsection, DC Color Capabilities between DCN generations, is a good starting point to understand what we have been doing on the kernel side to provide a broader color management API with AMD driver-specific properties.

Why do we need more kernel color properties on Linux? Blending is the process of combining multiple planes (the framebuffer abstraction) according to their mode settings. Before blending, we can manage the colors of the various planes separately; after blending, we have combined those planes into only one output per CRTC. Color conversions after blending would be enough in a single-plane scenario or when dealing with planes in the same color space on the kernel side. Still, they cannot help with the blending of multiple planes with different color spaces and luminance levels. With plane color management properties, userspace can get a more accurate representation of colors to deal with the diversity of color profiles of devices in the graphics chain, support a wide color gamut (WCG), and convert High-Dynamic-Range (HDR) content to Standard-Dynamic-Range (SDR) content (and vice-versa). With a GPU-accelerated display color management pipeline, we can use hardware blocks for color conversions and color mapping and support advanced color management. The current DRM color management API enables us to perform some color conversions after blending, but there is no interface to calibrate input space by planes. Note that here I'm not considering some workarounds in the AMD display manager that map the DRM CRTC de-gamma and DRM CRTC CTM properties to the pre-blending DC de-gamma and gamut remap blocks, respectively. So, in more detail, it only exposes three post-blending features (a usage sketch follows the list):
  • DRM CRTC de-gamma: used to convert the framebuffer's colors to linear gamma;
  • DRM CRTC CTM: used for color space conversion;
  • DRM CRTC gamma: used to convert colors to the gamma space of the connected screen.
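To make this concrete, here is a minimal userspace sketch (my illustration, not code from the post) that programs an identity GAMMA_LUT on a CRTC through libdrm's property-blob interface. The LUT size and the crtc_id/property-ID parameters are assumptions: real code must read the GAMMA_LUT_SIZE property, look up GAMMA_LUT by name via drmModeObjectGetProperties, and a compositor would normally submit this as part of an atomic commit.

/* Sketch: set an identity gamma LUT on a CRTC via libdrm. */
#include <stdint.h>
#include <xf86drmMode.h>
#include <drm/drm_mode.h> /* struct drm_color_lut, DRM_MODE_OBJECT_CRTC */

#define LUT_SIZE 256 /* assumed; query the GAMMA_LUT_SIZE property instead */

static int set_identity_gamma(int fd, uint32_t crtc_id, uint32_t gamma_lut_prop)
{
    struct drm_color_lut lut[LUT_SIZE];
    uint32_t blob_id;
    int ret;

    /* Identity ramp: output equals input across the 16-bit range. */
    for (int i = 0; i < LUT_SIZE; i++) {
        uint16_t v = (uint16_t)((i * 0xffffULL) / (LUT_SIZE - 1));
        lut[i].red = lut[i].green = lut[i].blue = v;
        lut[i].reserved = 0;
    }

    /* LUT contents travel to the kernel as a property blob on the CRTC. */
    ret = drmModeCreatePropertyBlob(fd, lut, sizeof(lut), &blob_id);
    if (ret)
        return ret;
    ret = drmModeObjectSetProperty(fd, crtc_id, DRM_MODE_OBJECT_CRTC,
                                   gamma_lut_prop, blob_id);
    drmModeDestroyPropertyBlob(fd, blob_id);
    return ret;
}

The CTM and the driver-specific LUTs discussed below ride on the same blob mechanism; only the property names and payload structs change.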

AMD driver-specific color management interface We can compare the Linux color management API with and without the driver-specific color properties. From now on, we denote driver-specific properties with the AMD prefix and generic properties with the DRM prefix. For visual comparison, I bring the DCN 3.0 family color caps and mapping diagram closer and present it here again: Mixing AMD driver-specific color properties with DRM generic color properties, we have a broader Linux color management system with the following features exposed by properties in the plane and CRTC interfaces, as summarized by this updated diagram: The blocks highlighted by red lines are the new properties in the driver-specific interface developed by me (Igalia) and Joshua (Valve). The red dashed lines are new links between the API and AMD driver components implemented by us to connect the Linux/DRM interface to AMD hardware blocks, mapping components accordingly. In short, we have the following color management properties exposed by the DRM/AMD display driver:
  • Pre-blending - AMD Display Pipe and Plane (DPP):
    • AMD plane de-gamma: 1D LUT and pre-defined transfer functions; used to linearize the input space of a plane;
    • AMD plane CTM: 3x4 matrix; used to convert plane color space;
    • AMD plane shaper: 1D LUT and pre-defined transfer functions; used to delinearize and/or normalize colors before applying 3D LUT;
    • AMD plane 3D LUT: 17x17x17 size with 12 bit-depth; three dimensional lookup table used for advanced color mapping;
    • AMD plane blend/out gamma: 1D LUT and pre-defined transfer functions; used to linearize back the color space after 3D LUT for blending.
  • Post-blending - AMD Multiple Pipe/Plane Combined (MPC):
    • DRM CRTC de-gamma: 1D LUT (can't be set together with plane de-gamma);
    • DRM CRTC CTM: 3x3 matrix (remapped to post-blending matrix);
    • DRM CRTC gamma: 1D LUT + AMD CRTC gamma TF; added to take advantage of driver pre-defined transfer functions;
Note: You can find more about AMD display blocks in the Display Core Next (DCN) - Linux kernel documentation, provided by Rodrigo Siqueira (Linux/AMD display developer) in a 2021 documentation series. In the next post, I'll revisit this topic, explaining display and color blocks in detail.
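Since both the DRM CRTC CTM and the AMD plane CTM carry fixed-point coefficients, a small helper illustrates the packing. This is my sketch based on the generic struct drm_color_ctm, whose nine entries are sign-magnitude S31.32 fixed point (sign in bit 63, magnitude below); the AMD plane CTM is a 3x4 driver-specific blob, but each entry follows the same convention.

/* Sketch: pack floating-point matrix coefficients into the
 * sign-magnitude S31.32 fixed-point format of DRM CTM blobs. */
#include <math.h>
#include <stdint.h>
#include <drm/drm_mode.h> /* struct drm_color_ctm */

static uint64_t to_s31_32(double v)
{
    /* Magnitude in the low 63 bits, sign flag in bit 63. */
    uint64_t mag = (uint64_t)(fabs(v) * (double)(1ULL << 32));
    return (v < 0.0 ? (1ULL << 63) : 0) | (mag & ~(1ULL << 63));
}

static void fill_ctm(struct drm_color_ctm *ctm, const double m[9])
{
    for (int i = 0; i < 9; i++)
        ctm->matrix[i] = to_s31_32(m[i]);
}

An identity matrix, for instance, packs as 1ULL << 32 on the diagonal and zero elsewhere.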

How did we get a large set of color features from AMD display hardware? So, looking at AMD hardware color capabilities in the first diagram, we can see no post-blending (MPC) de-gamma block in any hardware family. We can also see that the AMD display driver maps CRTC/post-blending CTM to pre-blending (DPP) gamut_remap, but there is a post-blending (MPC) gamut_remap (DRM CTM) in newer hardware versions, which include SteamDeck hardware. You can find more details about hardware versions in the Linux kernel documentation/AMDGPU Product Information. I needed to rework these two mappings mentioned above to provide pre-blending/plane de-gamma and CTM for SteamDeck. I changed the DC mapping to detach stream gamut remap matrices from the DPP gamut remap block. That means mapping AMD plane CTM directly to the DPP/pre-blending gamut remap block and DRM CRTC CTM to the MPC/post-blending gamut remap block. In this sense, I also limited plane CTM properties to those hardware versions with MPC/post-blending gamut_remap capabilities, since older versions cannot support this feature without clashes with DRM CRTC CTM. Unfortunately, I couldn't prevent conflict between AMD plane de-gamma and DRM CRTC de-gamma, since post-blending de-gamma isn't available in any AMD hardware version until now. The fact is that a post-blending de-gamma makes little sense in the AMD color pipeline, where plane blending works better in a linear space, and there are enough color blocks to linearize content before blending. To deal with this conflict, the driver now rejects atomic commits if users try to set both AMD plane de-gamma and DRM CRTC de-gamma simultaneously. Finally, we had no other clashes when enabling the other AMD driver-specific color properties for our use case, Gamescope/SteamDeck. Our main work for the remaining properties was understanding the data flow of each property, the hardware capabilities and limitations, and how to shape the data for programming the registers - AMD color block capabilities (and limitations) are the topics of the next blog post. Besides that, we fixed some driver bugs along the way, since this was the first Linux use case for most of the new color properties, and some behaviors are only exposed when exercising the engine. Take a look at the Gamescope/Steam Deck Color Pipeline [**], and see how Gamescope uses the new API to manage color space conversions and calibration (please click on the image for a better view). In the next blog post, I'll describe the implementation and technical details of each pre- and post-blending color block/property on the AMD display driver. * Thanks to Harry Wentland for helping with diagrams, color concepts and AMD capabilities. ** Thanks to Joshua Ashton for providing and explaining the Gamescope/Steam Deck color pipeline. *** Thanks to the Linux Graphics community - explicitly Harry, Joshua, Pekka, Simon, Sebastian, Siqueira, Alex H. and Ville - for all the learning during this Linux DRM/AMD color journey. Also, thanks to Carlos and Tomas for organizing the 2023 Display/HDR Hackfest, where we had a great and immersive opportunity to discuss Color & HDR on Linux.

Russ Allbery: Review: Some Desperate Glory

Review: Some Desperate Glory, by Emily Tesh
Publisher: Tordotcom
Copyright: 2023
ISBN: 1-250-83499-6
Format: Kindle
Pages: 438
Some Desperate Glory is a far-future space... opera? That's probably the right genre classification given the setting, but this book is much more intense and character-focused than most space opera. It is Emily Tesh's first novel, although she has two previous novellas that were published as books. The alien majo and their nearly all-powerful Wisdom have won the war by destroying Earth with an antimatter bomb. The remnants of humanity were absorbed into the sprawling majo civilization. Gaea Station is the lone exception: a marginally viable station deep in space, formed from a lifeless rocky planetoid and the coupled hulks of the last four human dreadnoughts. Gaea Station survives on military discipline, ruthless use of every available resource, and constant training, raising new generations of soldiers for the war that it refuses to let end. While Earth's children live, the enemy shall fear us. Kyr is a warbreed, one of a genetically engineered line of soldiers that, following an accident, Gaea Station has lost the ability to make except the old-fashioned way. Among the Sparrows, her mess group, she is the best at the simulated combat exercises they use for training. She may be the best of her age cohort except her twin Magnus. As this novel opens, she and the rest of the Sparrows are about to get their adult assignments. Kyr is absolutely focused on living up to her potential and the attention of her uncle Jole, the leader of the station. Kyr's future will look nothing like what she expects. This book was so good, and I despair of explaining why it was so good without unforgivable spoilers. I can tell you a few things about it, but be warned that I'll be reduced to helpless gestures and telling you to just go read it. It's been a very long time since I was this surprised by a novel, possibly since I read Code Name: Verity for the first time. Some Desperate Glory follows Kyr in close third-person throughout the book, which makes the start of this book daring. If you're getting a fascist vibe from the setup, you're not wrong, and this is intentional on Tesh's part. But Kyr is a true believer at the start of the book, so the first quarter has a protagonist who is sometimes nasty and cruel and who makes some frustratingly bad decisions. Stay with it, though; Tesh knows exactly what she's doing. This is a coming of age story, in a way. Kyr has a lot to learn and a lot to process, and Some Desperate Glory is about that process. But by the middle of part three, halfway through the book, I had absolutely no idea where Tesh was going with the story. She then pulled the rug out from under me, in the best way, at least twice more. Part five of this book is an absolute triumph, the payoff for everything that's happened over the course of the novel, and there is no way I could have predicted it in advance. It was deeply satisfying in that way where I felt like I learned some things along with the characters, and where the characters find a better ending than I could possibly have worked out myself. Tesh does use some world-building trickery, which is at its most complicated in part four. That was the one place where I can point to a few chapters where I thought the world-building got a bit too convenient in order to enable the plot. But it also allows for some truly incredible character work. I can't describe that in detail because it would be a major spoiler, but it's one of my favorite tropes in fiction and Tesh pulls it off beautifully. 
The character growth and interaction in this book are just so good: deep and complicated and nuanced and thoughtful in a way that revises reader impressions of earlier chapters. The other great thing about this book is that for a 400+ page novel, it moves right along. Both plot and character development are beautifully paced with only a few lulls. Tesh also doesn't belabor conversations. This is a book that provides just the right amount of context for the reader to fully understand what's going on, and then trusts the reader to be following along and moves straight to the next twist. That makes it propulsively readable. I had so much trouble putting this book down at any time during the second half. I can't give any specifics, again because of spoilers, but this is not just a character story. Some Desperate Glory has strong opinions on how to ethically approach the world, and those ethics are at the center of the plot. Unlike a lot of books with a moral stance, though, this novel shows the difficulty of the work of deriving that moral stance. I have rarely read a book that more perfectly captures the interior experience of changing one's mind with all of its emotional difficulty and internal resistance. Tesh provides all the payoff I was looking for as a reader, but she never makes it easy or gratuitous (with the arguable exception of one moment at the very end of the book that I think some people will dislike but that I personally needed). This is truly great stuff, probably the best science fiction novel that I've read in several years. Since I read it (I'm late on reviews again), I've pushed it on several other people, and I've not had a miss yet. The subject matter is pretty heavy, and this book also uses several tropes that I personally adore and am therefore incapable of being objective about, but with those caveats, this gets my highest possible recommendation. Some Desperate Glory is a complete story in one novel with a definite end, although I love these characters so much that I'd happily read their further adventures, even if those are thematically unnecessary. Content warnings: Uh, a lot. Genocide, suicide, sexual assault, racism, sexism, homophobia, misgendering, and torture, and I'm probably forgetting a few things. Tesh doesn't linger on these long, but most of them are on-screen. You may have to brace yourself for this one. Rating: 10 out of 10

18 August 2023

Dirk Eddelbuettel: #43: r2u Faster Than the Alternatives

Welcome to the 43rd post in the $R^4 series. And with that, a good laugh. When I set up Sunday's post, I was excited enough about the (indeed exciting!!) topic of r2u via browser or vscode that I mistakenly labeled it as the 41st post. And overlooked the existing 41st post from July! So it really is as if Douglas Adams, Arthur Dent, and, for good measure, Dirk Gently, looked over my shoulder and declared there shall not be a 42nd post!! So now we have two 41st posts: Sunday's and July's. Back to the current topic, which is of course r2u. Earlier this week we had a failure in (an R based) CI run (using a default action which I had not set up). A package was newer in source than binary, so a build from source was attempted. And of course it failed, as it was a package needing a system dependency to build. Which the default action did not install. I am familiar with the problem via my general use of r2u (or my r-ci which uses it under the hood). And there we use a bspm variable to prefer binary over possibly newer source. So I was curious how one would address this with the default actions. It so happens that the same morning I spotted a StackOverflow question on the same topic, where the original poster had suffered the exact same issue! I offered my approach (via r2u) as a comment and was later notified of a follow-up answer by the OP. Turns out there is a new, more powerful action that does all this, potentially flipping to a newer version and building it, all while using a cache. Now I was curious, and in the evening cloned the repo to study the new approach and compare the new action to what r2u offers. In particular, I was curious whether a use of caches would be beneficial on repeated runs. A screenshot of the resulting Actions and their times follows. Turns out maybe not so much (yet?). As the actions page of my cloned comparison repo shows in this screenshot, r2u is consistently faster at always below one minute, compared to the new entrant at always over two minutes. (I should clarify that the original action sets up dependencies, then scrapes, and commits. I am timing only the setup of dependencies here.) We can also extract the six datapoints and quickly visualize them. Now, it is of course entirely possible that not all possible avenues for speedups were exploited in how the action was set up. If so, please file an issue at the repo and I will try to update accordingly. But for now it seems that a default of setting up r2u is easily more than twice as fast as an otherwise very compelling alternative (with arguably much broader scope). However, where r2u chooses to play, on the increasingly common, popular and powerful Ubuntu LTS setup, it clearly continues to run circles around alternate approaches. So the saying remains: r2u: fast, easy, reliable. If you like this or other open-source work I do, you can now sponsor me at GitHub.
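As an aside, the bspm knob mentioned above is, to the best of my understanding, set along these lines in an Rprofile; treat this as a sketch and consult the r2u and bspm documentation for the authoritative setup:

# Sketch: prefer r2u binaries via bspm even when CRAN has a newer source version.
# bspm::enable() routes install.packages() through the system package manager;
# bspm.version.check = FALSE skips the version comparison that would otherwise
# trigger a build from source.
suppressMessages(bspm::enable())
options(bspm.version.check = FALSE)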

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


15 August 2023

Dirk Eddelbuettel: #41: Using r2u in Codespaces

Welcome to the 41st post in the $R^4 series. This post draws on joint experiments first started by Grant, building on the lovely work done by Eitsupi as part of our Rocker Project. In short, r2u is an ideal match for Codespaces, a Microsoft/GitHub service to run code in the cloud via browser or Visual Studio Code. This post co-serves as the README.md in the .devcontainer directory as well as a vignette for r2u. So let us get into it. Starting from the r2u repository, the .devcontainer directory provides a small self-contained file devcontainer.json to launch an executable R environment using r2u. It is based on the example in Grant McDermott's codespaces-r2u repo and reuses its documentation. It is driven by the Rocker Project's Devcontainer Features repo, creating a fully functioning R environment for cloud use in a few minutes. And thanks to r2u you can easily add to this environment by installing new R packages in a fast and failsafe way.

Try it out To get started, simply click on the green Code button at the top right. Then select the Codespaces tab and click the + symbol to start a new Codespace. The first time you do this, it will open up a new browser tab where your Codespace is being instantiated. This first-time instantiation will take a few minutes (feel free to click View logs to see how things are progressing), so please be patient. Once built, your Codespace will deploy almost immediately when you use it again in the future. After the VS Code editor opens up in your browser, feel free to open up the examples/sfExample.R file. It demonstrates how r2u enables us to install packages and their system dependencies with ease, here installing the packages sf (including all its geospatial dependencies) and ggplot2 (including all its dependencies). You can run the code easily in the browser environment: highlight or hover over line(s) and execute them by hitting Cmd+Return (Mac) / Ctrl+Return (Linux / Windows). (Both example screenshots reflect the initial codespaces-r2u repo as well as the personal scratchspace one we started with; both of course work here too.) Do not forget to close your Codespace once you have finished using it. Click the Codespaces tab at the very bottom left of your code editor / browser and select Close Current Codespace in the resulting pop-up box. You can restart it at any time, for example by going to https://github.com/codespaces and clicking on your instance.

Extend r2u with r-universe r2u offers fast, easy, reliable access to all of CRAN via binaries for Ubuntu focal and jammy. When using the latter (as is the default), it can be combined with r-universe and its Ubuntu jammy binaries. We demonstrate this in a second example file, examples/censusExample.R, which installs both the cellxgene-census and tiledbsoma R packages as binaries from r-universe (along with about 100 dependencies), downloads single-cell data from Census and uses Seurat to create PCA and UMAP decomposition plots. Note that in order to run this you have to change the Codespaces default instance from small (4 GB RAM) to large (16 GB RAM).

Local DevContainer build Codespaces are DevContainers running in the cloud (where DevContainers are themselves just Docker images running with some VS Code sugar on top). This gives you the very powerful ability to edit locally but run remotely in the hosted codespace. To test this setup locally, simply clone the repo and open it up in VS Code. You will need to have Docker installed and running on your system (see here). You will also need the Remote Development extension (you will probably be prompted to install it automatically if you do not have it yet). Select Reopen in Container when prompted. Otherwise, click the >< tab at the very bottom left of your VS Code editor and select this option. To shut down the container, simply click the same button and choose Reopen Folder Locally . You can always search for these commands via the command palette too (Cmd+Shift+p / Ctrl+Shift+p).

Use in Your Repo To add this ability of launching Codespaces in the browser (or editor) to a repo of yours, create a directory .devcontainer in your selected repo, and add the file .devcontainer/devcontainer.json. You can customize it by enabling other features, or use the postCreateCommand field to install packages (while taking full advantage of r2u).
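A minimal devcontainer.json along these lines could look as follows; the feature identifier and options are my assumptions based on the Rocker Project's Devcontainer Features repo mentioned above, so check that repo for the current values:

{
    "name": "r2u",
    "image": "mcr.microsoft.com/devcontainers/base:jammy",
    "features": {
        "ghcr.io/rocker-org/devcontainer-features/r-apt:0": {}
    },
    "postCreateCommand": "R -q -e 'install.packages(\"ggplot2\")'"
}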

Acknowledgments There are a few key plumbing pieces that make everything work here. Thanks to:

Colophon More information about r2u is at its site, and we have answered some questions in issues and at StackOverflow. More questions are always welcome! If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Originally posted 2023-08-13, minimally edited 2023-08-15, which changed the timestamp and URL.

11 August 2023

Birger Schacht: Another round of rust

A couple of weeks ago I had to undergo surgery, because one of my kidneys malfunctioned. Everything went well and I'm on my way to recovery. Luckily the most recent local heat wave was over just shortly after I got home, which made being stuck at home a little easier (not sure yet when I'll be allowed to do sports again, I miss my climbing gym). At first I did not have that much energy to do computer stuff, but after a week or so I was able to sit in front of the screen for short amounts of time and I started to get into writing Rust code again.

carl The first thing I did was updating carl. I updated all the dependencies and switched the dependency that does coloring from ansi_term, which is unmaintained, to nu-ansi-term. When I then updated the clap dependency to version 4, I realized that clap now depends on the anstyle crate for text styling - so I updated carl's coloring code once again so it now uses anstyle, which led to fewer dependencies overall. Implementing this change I also did some refactoring of the code. carl now also has its own website as well as a subdomain [1]. I also added a couple of new date properties to carl, namely all weekdays as well as odd and even - this means it is now possible to choose a separate color for every weekday and have a rainbow calendar:
screenshot carl
This is included in version 0.1.0 of carl, which I published on crates.io.

typelerate Then I started writing my first game - typelerate. It is a copy of the great typespeed, without the multiplayer support. To describe the idea behind the game, I quote the typespeed website:
Typespeed's idea is ripped from ztspeed (a DOS game made by Zorlim). The idea behind the game is rather easy: type words that are flying by from left to right as fast as you can. If you miss 10 or more words, game is over.
Instead of the multiplayer support, typelerate works with UTF-8 strings and it also has another game mode: in typespeed you only type what's scrolling across the screen. In typelerate I added the option to have one or more answer strings, one of which has to be typed instead of the word flying across the screen. This lets you implement a kind of question/answer game. To be backwards compatible with the existing wordfiles from typespeed [2], the wordfiles for the question/answer games contain comma separated values. The typelerate repository contains wordfiles with Python and Rust keywords as well as wordfiles where you are shown an Emoji and you have to type the corresponding Github shortcode, as sketched below. I'm happy to add additional wordfiles (there could be, for example, math questions).
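A hypothetical wordfile for the question/answer mode could look like this, with the displayed string first and the accepted answer(s) after the comma (these lines are my illustration, not taken from the repository):

🚀,:rocket:
🐍,:snake:
7*6,42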
screenshot typelerate

marsrover Another commandline game I really like, because I am fascinated by the animated ASCII graphics, is the venerable moon-buggy. In this game you have to drive a vehicle across the moon's surface and deal with obstacles like craters or aliens. I reimplemented the game in Rust and called it marsrover:
screenshot marsrover
I published it on crates.io, you can find the repository on github. The game uses a configuration file in $XDG_CONFIG_HOME/marsrover/config.toml - you can configure the colors of the elements as well as the levels. The game comes with four levels predefined, but you can use the configuration file to override that list of levels with levels of your own properties. The level properties define the probabilities of obstacles occurring on your way across the mars surface, and a points setting that defines how many points the user can get in that level (the game switches to the next level once the user reaches that many points):
[[levels]]
# probability of each obstacle type appearing in this level
prob_ditch_one = 0.2
prob_ditch_two = 0.0
prob_ditch_three = 0.0
prob_alien = 0.5
# points required before the game advances to the next level
points = 100
After the last level, the game generates new ones on the fly.

  1. Thanks to the service from https://cli.rs.
  2. Actually, typelerate is not backwards compatible with the typespeed wordfiles, because those are not UTF-8 encoded.

4 August 2023

Louis-Philippe V ronneau: pymonitair: Air Quality Monitoring Display with MicroPython

I've never been a fan of IoT devices for obvious reasons: not only do they tend to be excellent at being expensive vendor locked-in machines, but far too often, they also end up turning into e-waste after a short amount of time. Manufacturers can go out of business or simply decide to shut down the cloud servers for older models, and then you're stuck with a brick. Well, this all changes today, as I've built my first IoT device and I love it. Introducing pymonitair. What pymonitair is: a MicroPython project that aims to display weather data from a home weather station (like the ones sold by AirGradient) on a small display. The source code was written for the Raspberry Pi Pico W, the Waveshare Pico OLED 1.3 display and the RevolvAir Revo 1 weather station, but can be adapted to other displays and stations easily, as I tried to keep the code as modular as possible. The general MicroPython code itself isn't specific to the Raspberry Pi Pico and shouldn't need to be modified for other boards. pymonitair features: Here's a demo of me scrolling through the different pages and (somewhat failing) to turn the screen on and off: Why? If you follow my blog, you'll know that my last entry was about building a set of tools to collect and graph data from a weather station my neighbor set up. Why on Earth would I need a separate device to show this data, when the website I've built works perfectly fine and is accessible on any computer or smartphone? Mostly alerts. When the air quality here dropped following forest fires, I found that keeping track of whether I had to close my windows and bunker down was quite a hassle. Air quality would degrade during the day and I would only notice it hours later. With pymonitair, I'll have a little screen flashing angrily at me whenever this happens. A simpler solution would probably have been to forgo hardware altogether and code some icinga2 alert to ping me over Signal whenever the air quality got bad. Hacking on pymonitair was mostly a way to learn to use MicroPython and familiarize myself with this type of embedded hardware device. I'll surely blog about this later this year, but I plan to use a very similar stack to mod my apartment's HVAC unit to stop pulling air from outside when an air quality sensor detects cigarette smoke (or bad air quality in general). Things I've learnt This project was super fun and taught me many things:

  1. PM1, PM2.5, PM10, Temperature, Humidity and Pressure
  2. Part of the screen will flash repeatedly
  3. I did look for other solutions to transfer files to the board, but none of them were actually maintained. I nearly finished packaging ampy before realising it was officially unmaintained, and its main alternative, rshell, had had its last release in December 2021. When I caught myself seriously considering writing a script to transfer files over the serial link, I gave up and decided thonny was not that bad after all.

31 July 2023

Paul Wise: FLOSS Activities July 2023

Focus This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Debugging

Review

Administration
  • Debian IRC: rescue empty Debian IRC channel
  • Debian wiki: unblock IP addresses, approve accounts

Communication
  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors The libpst, nmap, sptag, pytest-rerunfailures work was sponsored. All other work was done on a volunteer basis.

26 July 2023

Shirish Agarwal: Manipur Violence, Drugs, Binging on Northshore, Alaska Daily, Doogie Kamealoha and the EU Digital Resilience Act

Manipur Videos Warning: The text might be mature and will have references to violence, so if there are kids around or you are sensitive, please excuse. A few days back, I saw the videos and I cannot convey the rage, shame and many conflicting emotions that went through me. I almost didn't want to share, but couldn't stop myself. The women in the video were palmed, fingered, paraded nude, and later reportedly raped and murdered. And there have been more than a few cases. The next day I saw another video that showed beheaded heads, and Kukis being killed just next to their houses. I couldn't imagine what those people must be feeling, as the CM has been making partisan statements against them. One of the husbands of the Kuki women who had been paraded and fondled is an officer in the Indian Army. The Meiteis even tried to burn his home, but the Army intervened and didn't let it get burnt. The CM's own statement, as shared before, tells of his inability to bring the situation out of crisis. In fact, his statement was dumb, stating that the Internet shutdown was because there were more than 100 such cases. And it's spreading to the nearby Northeast regions. Now Mizoram, the nearest neighbor, where the Meiteis are not dominant, is going through similar things. The Mizos have told the Meiteis to get out. To date, the PM has chosen not to visit Manipur. He just made a small one-minute statement about it, saying how the women have shamed India, an approximation of what he said. While it's actually not the women but the men who have shamed India. The Wire has been talking to the Meiteis, the Kukis and the Nagas. A Kuki woman sort of bared all. She is right on many counts. The GOI, while wanting to paint the Kukis in a negative light, has forgotten what has been happening in its own state, especially among its own youth, as well as in other states, while also ignoring the larger geopolitics and business around it. The Taliban has been cracking down, as even they couldn't stand to see young boys and women becoming drug users. I had read somewhere that 1 in 4 or 1 in 5 young people in Afghanistan is now in its grip. So no wonder the Taliban is trying to eradicate and shut down drug use among its own youth. Circling back to Manipur, I was under the wrong impression that the Internet shutdown was over. After those videos became viral, as well as the others I mentioned, the orders were given again and there is a shutdown. It is not fully shut, but now only Govt. offices have it, so nobody can share a video that goes against any State or Central Govt. narrative. A real sad state of affairs. Update: There is conditional reopening, whatever that means. When I saw the videos, the first thing I felt was powerlessness, powerlessness to do anything about it. The second was that if I do not write about it, amplify it and let others know about it, then what's the use of being able to blog?

Mental Health, Binging on various Webseries Both the videos shocked me and I couldn't sleep that night or the night after. Even after doing work and all, they would come in unobtrusively in my nightmares. While I felt a bit foolish, I felt it would be nice to binge on some webseries. Little was I to know that both Northshore and Alaska Daily would have stories similar to what is happening here. While the story in Alaska Daily is fictional, it very closely resembles a real newspaper called Anchorage Daily News. Even there, the story turns on the Inuit women, one of the marginalized communities in Alaska. The only difference I can see between the GOI and the Alaskan Government is that the Alaskan Government was much more subtle in doing the same things. There are some differences though. First, the State is and was responsive to the local press, and apart from one close call for one of its reporters, most reporters do not have to think about their own life being in peril. Here, the press cannot look after either their livelihood or their life. It was a juvenile kid who actually shot the video, uploaded it and made it viral. One needs to just remember the case details of Siddique Kappan. Just for sharing the news and the video he was arrested. Bail was denied to him time and time again, citing that the Police were investigating. Only after 2 years and 3 months did he get bail, and that too because for none of the charges the Police had were they able to show any prima facie evidence. One of the better interviews, though, was of Vrinda Grover. For those who don't know her, her Wikipedia page does tell a bit about her, although it is woefully incomplete. For example, most recently she relentlessly pursued the unconstitutional Internet shutdown that happened in Kashmir for 5 months. Just like in Manipur, the shutdown was there to bury crimes either committed or facilitated by the State. For the issues of livelihood, one can take the cases of Bipin Yadav and Rashid Hussain. Both were fired by their employer Dainik Bhaskar because they asked the BJP MP Smriti Irani what she had done for the state. The problem for Dainik Bhaskar, or for any other mainstream media, is that most of them rely on Government advertisements. Private investment in India has fallen to record lows, mostly due to the policies made by the Centre. If any entity or sector grows a bit, then either Adani or Ambani will one way or the other take it. So, for most first- and second-generation entrepreneurs it doesn't make sense to grow and then finally sell to one of these corporates at a loss, with the GOI on the Adani/Ambani side of any deal. The MSME sector, which is and used to be the second highest employer, hasn't been able to recover from the shocks of demonetization, GST and then the pandemic, each resulting in more and more closures and shutdowns. Joblessness has gone up tremendously in North India, which the Government tries to deny. The most interesting point in all the above examples is that within a month or less, whatever the media reports gets scrubbed. Even the firing of the journos that was covered by some of the mainstream media isn't there anymore. I have to use secondary sources instead of primary sources. One can imagine the chilling effects on reportage due to the above. The sad fact is that even with all the money in the world, the PM is unable to come to Parliament to face questions.
Why is the PM not answering in Parliament, even Rahul Gandhi is not there - Surya Pratap Singh, retired IAS officer.
The above poster/question is by Surya Pratap Singh, a retired IAS officer. He asks why the PM is unable to answer in either of the houses. As shared before, the Govt. wants very limited discussion. Even yesterday, Lok Sabha TV just showed the BJP MPs making statements, but the mic was silent or off during whatever questions or statements were made by the opposition. If this isn't mockery of Indian democracy, then I don't know what is. Even the media landscape has been altered substantially within the last few years. Both Adani and Ambani have divided the media pie between themselves. One of the last bastions of the free press, NDTV, was bought by Adani in a hostile takeover. Both Ambani and Adani are close to this Government. In fact, there is no sector in which one or the other is not present. Media houses like Newsclick, The Wire etc., which are a fraction of the mainstream press, are where most of the youth have been going to get their news, as they are not partisan. Although even there, the GOI has time and again interfered. The Wire has had too many 504 Gateway timeouts in recent months, and they have been forced to move most of their journalism from online text to video, rather YouTube, in order to escape both the censoring and the timeouts as shared above. In such a hostile environment, how both organizations are somehow able to survive is a miracle. Most local reportage is also going to YouTube, as that's the best way for them to stay clear of Govt. censors. Not an ideal situation, but that's the way it is. The difference between the Indian and Israeli media can be seen through this
The above is a screenshot of how the Israeli media has reacted to the Israeli Government's Knesset judicial overhaul. Here, the press itself erodes its own credibility by giving in to the Government day and night.

Binging on Webseries Saw Northshore, Three Pines, Alaska Daily and Doogie Kamealoha M.D., which is based on Doogie Howser M.D. Of the four, I enjoyed Doogie Kamealoha M.D. the most, but then it might be because it's a copy of Doogie Howser, just updated to the new millennium, and there are some good childhood memories associated with that series. The others are also good. I tried not to watch European stuff, as most of it is twisted and I didn't want to be in that space.

EU Digital Operational Resilience Act and impact on FOSS A few days ago, the EU apparently shared the above Act. One can read more about it here. This would have more impact on FOSS, as much development of various FOSS distributions happens in the EU. A fair bit of Debian's own development happens in Germany and France. There have been calls to make things clearer, especially for FOSS, given that most developers do FOSS development on the side or as a hobby while their day job is and would be different. The part about consumer electronics and FOSS is a tricky one, as updates can screw up your systems. Microsoft has a long history of devices not working after an update or upgrade, and this is not limited to Windows, as they would like to believe. Even Apple seems to be having its share of issues time and time again. One would have hoped that these companies that make billions of dollars from their hardware and software sales would be doing more testing and QA and be more aware of security issues. FOSS, on the other hand, while being more responsive, doesn't make as much money vis-a-vis the competitors. Let's take the most concrete example. The most successful FOSS mobile phone maker is Purism. But its phone has priced itself out of the market. A huge part of that is to do with both economies of scale and trying to build an infrastructure and skills in the States where none, or minimal, exists. Compare that to, say, the PinePhone Pro, which is manufactured in Hong Kong and is priced at a third of the same. For most people it is simply not affordable in these times. Add to that the complexity of modern cellphones, which makes it harder, not easier, for most people to be vigilant and update the phone at all times. Maybe we need more dumbphones such as Light and Punkt, but then, can those be remotely hacked or not? There don't seem to be any answers on that one; I haven't even seen anybody ask those questions. They may have their own chicken-and-egg issues. For people like me who have lost hearing: while I can navigate smartphones for now, as I become old I don't see anything that would help me. For many an elderly population, both hearing and seeing are the first to fade. There don't seem to be any solutions targeted at them, even though they are 5-10% of any population at the very least. Probably more so in Europe and the U.S., as well as Japan and China. All of them are clearly under-served markets, but I don't know of a solution for them. At least to me, that's an open question.
